Un-regularizing: approximate proximal point and faster stochastic algorithms for empirical risk minimization

A. Derivation of regularized ERM duality

Abstract

For completeness, in this section we derive the dual (5) to the problem of computing the proximal operator for the ERM objective (3).
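
Since equations (3) and (5) themselves are not reproduced on this page, the following is only a sketch of the standard Fenchel-duality argument, under the assumption that the ERM objective has the linear-prediction form $f(x) = \frac{1}{n}\sum_i f_i(a_i^\top x)$ and that the proximal subproblem adds the quadratic penalty $\frac{\lambda}{2}\|x-y\|^2$ around a center $y$; the paper's own scaling and sign conventions may differ.

\[
\min_{x}\ \frac{1}{n}\sum_{i=1}^{n} f_i(a_i^\top x) + \frac{\lambda}{2}\|x-y\|^2
\;=\; \min_{x}\ \sup_{\alpha\in\mathbb{R}^n}\ \frac{1}{n}\sum_{i=1}^{n}\bigl(\alpha_i\, a_i^\top x - f_i^*(\alpha_i)\bigr) + \frac{\lambda}{2}\|x-y\|^2 ,
\]
using $f_i(z)=\sup_{\alpha_i}\{\alpha_i z - f_i^*(\alpha_i)\}$. Exchanging $\min$ and $\sup$ (justified by convexity and strong duality) and setting the gradient in $x$ to zero gives
\[
x(\alpha) \;=\; y - \frac{1}{\lambda n} A^\top \alpha, \qquad A = (a_1,\dots,a_n)^\top ,
\]
and substituting $x(\alpha)$ back in yields the dual problem
\[
\max_{\alpha\in\mathbb{R}^n}\ -\frac{1}{n}\sum_{i=1}^{n} f_i^*(\alpha_i) \;+\; \frac{1}{n}\,\alpha^\top A y \;-\; \frac{1}{2\lambda n^2}\,\|A^\top \alpha\|^2 .
\]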


Similar articles

Un-regularizing: approximate proximal point and faster stochastic algorithms for empirical risk minimization

We develop a family of accelerated stochastic algorithms that optimize sums of convex functions. Our algorithms improve upon the fastest running time for empirical risk minimization (ERM), and in particular linear least-squares regression, across a wide range of problem settings. To achieve this, we establish a framework, based on the classical proximal point algorithm, useful for accelerating ...
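
As a rough illustration of the generic proximal-point outer loop this framework builds on (not the paper's accelerated variant), here is a minimal sketch; the plain gradient-descent inner solver, the step sizes, and the toy least-squares data are placeholder assumptions.

import numpy as np

def approx_prox_point(grad_F, x0, lam, outer_iters=20, inner_iters=200, lr=0.01):
    """Approximate proximal point: each outer step approximately solves
    min_x F(x) + (lam/2)*||x - y||^2 and re-centers y at the result."""
    y = x0.astype(float).copy()
    for _ in range(outer_iters):
        x = y.copy()
        for _ in range(inner_iters):
            g = grad_F(x) + lam * (x - y)   # gradient of the regularized subproblem
            x -= lr * g                      # inexact inner solve by plain gradient steps
        y = x                                # new proximal center
    return y

# Toy usage: least-squares ERM, F(x) = (1/2n)*||A x - b||^2.
rng = np.random.default_rng(0)
A, b = rng.normal(size=(100, 5)), rng.normal(size=100)
grad_F = lambda x: A.T @ (A @ x - b) / len(b)
x_hat = approx_prox_point(grad_F, np.zeros(5), lam=1.0)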


Distributed Block-diagonal Approximation Methods for Regularized Empirical Risk Minimization

Designing distributed algorithms for empirical risk minimization (ERM) has become an active research topic in recent years because of the practical need to deal with the huge volume of data. In this paper, we propose a general framework for training an ERM model via solving its dual problem in parallel over multiple machines. Our method provides a versatile approach for many large-scale machine...


A Dual Augmented Block Minimization Framework for Learning with Limited Memory

In the past few years, several techniques have been proposed for training linear Support Vector Machines (SVM) in the limited-memory setting, where a dual block-coordinate descent (dual-BCD) method was used to balance the cost spent on I/O and computation. In this paper, we consider the more general setting of regularized Empirical Risk Minimization (ERM) when the data cannot fit into memory. In particular, w...
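
For reference, the single-machine building block referred to here, standard dual coordinate descent for the L1-loss linear SVM in the style of LIBLINEAR, looks roughly as follows; the out-of-core block management that makes it work in limited memory is omitted, and variable names are illustrative.

import numpy as np

def dual_cd_linear_svm(X, y, C=1.0, epochs=10, seed=0):
    """Dual coordinate descent for  min_a 0.5*a^T Q a - sum(a),  0 <= a_i <= C,
    with Q_ij = y_i*y_j*(x_i . x_j); X has shape (n, d), y has labels in {-1, +1}.
    Maintains the primal vector w = sum_i a_i*y_i*x_i."""
    n, d = X.shape
    a, w = np.zeros(n), np.zeros(d)
    sq = np.einsum("ij,ij->i", X, X)           # Q_ii = ||x_i||^2
    rng = np.random.default_rng(seed)
    for _ in range(epochs):
        for i in rng.permutation(n):
            if sq[i] == 0.0:
                continue
            g = y[i] * (w @ X[i]) - 1.0        # partial gradient of the dual at coordinate i
            a_new = np.clip(a[i] - g / sq[i], 0.0, C)
            w += (a_new - a[i]) * y[i] * X[i]  # keep w consistent with the dual variables
            a[i] = a_new
    return w, a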


Asynchronous Stochastic Proximal Optimization Algorithms with Variance Reduction

Regularized empirical risk minimization (R-ERM) is an important branch of machine learning, since it constrains the capacity of the hypothesis space and guarantees the generalization ability of the learning algorithm. Two classic proximal optimization algorithms, i.e., proximal stochastic gradient descent (ProxSGD) and proximal stochastic coordinate descent (ProxSCD) have been widely used to so...
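
As a concrete reminder of what a ProxSGD step looks like (this is the plain serial method, not the asynchronous, variance-reduced algorithms proposed in that paper), here is a minimal sketch; the L1 regularizer, its soft-thresholding prox, and the toy data are assumptions made for illustration.

import numpy as np

def soft_threshold(v, t):
    """Prox of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_sgd(grad_fi, prox_R, x0, n, step=0.01, epochs=10, seed=0):
    """ProxSGD for  min_x (1/n)*sum_i f_i(x) + R(x):
    one stochastic gradient step on a single f_i, followed by the prox of step*R."""
    rng = np.random.default_rng(seed)
    x = x0.astype(float).copy()
    for _ in range(epochs):
        for i in rng.permutation(n):
            x = prox_R(x - step * grad_fi(x, i), step)
    return x

# Toy usage: lasso-style R-ERM with f_i(x) = 0.5*(a_i . x - b_i)^2 and R(x) = mu*||x||_1.
rng = np.random.default_rng(1)
A, b, mu = rng.normal(size=(200, 10)), rng.normal(size=200), 0.1
grad_fi = lambda x, i: (A[i] @ x - b[i]) * A[i]
prox_R = lambda v, eta: soft_threshold(v, eta * mu)
x_hat = prox_sgd(grad_fi, prox_R, np.zeros(10), n=200)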


From Dual to Primal Sub-optimality for Regularized Empirical Risk Minimization

Regularized empirical risk minimization problems are fundamental tasks in machine learning and data analysis. Many successful approaches for solving these problems are based on a dual formulation, which often admits more efficient algorithms. Often, though, the primal solution is needed. In the case of regularized empirical risk minimization, there is a convenient formula for reconstructing an ...
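
In the common L2-regularized linear setting (stated here in our own notation, as an assumption about the setup that abstract has in mind), the reconstruction formula being alluded to is the standard primal-dual map:

\[
\min_{x}\ \frac{1}{n}\sum_{i=1}^{n}\phi_i(a_i^\top x) + \frac{\lambda}{2}\|x\|^2
\quad\longleftrightarrow\quad
\max_{\alpha}\ -\frac{1}{n}\sum_{i=1}^{n}\phi_i^*(-\alpha_i) - \frac{\lambda}{2}\Bigl\|\frac{1}{\lambda n}A^\top\alpha\Bigr\|^2 ,
\qquad
x(\alpha) = \frac{1}{\lambda n}A^\top\alpha ,
\]
so an approximately optimal dual $\alpha$ is mapped to the primal candidate $x(\alpha)$, and the question studied is how dual sub-optimality translates into primal sub-optimality under this map.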



Publication date: 2015